AIbase

# Low VRAM Consumption

## Qwen3 Reranker 4B W4A16 G128
**License:** Apache-2.0 · **Author:** boboliu · 157 · 1
A GPTQ-quantized version of Qwen/Qwen3-Reranker-4B that significantly reduces VRAM usage.
**Tags:** Large Language Model, Transformers
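The "W4A16 G128" label is commonly read as 4-bit weights, 16-bit activations, and a quantization group size of 128. Under that assumed reading, a minimal sketch of the weight-memory savings (ignoring zero-points, KV cache, and activation memory, which this estimate does not cover):

```python
# Rough weight-storage estimate for a ~4B-parameter model, comparing
# FP16 weights with W4A16 G128 quantization. Assumption: 4-bit packed
# weights plus one FP16 scale per group of 128 weights; zero-points
# and other runtime overheads are ignored.

def weight_bytes_fp16(n_params: int) -> int:
    """Weight memory at 16 bits (2 bytes) per parameter."""
    return n_params * 2

def weight_bytes_w4_g128(n_params: int, group_size: int = 128) -> int:
    """4-bit packed weights plus one FP16 scale per group."""
    packed = n_params // 2                 # two 4-bit weights per byte
    scales = (n_params // group_size) * 2  # one FP16 scale per group
    return packed + scales

n = 4_000_000_000  # ~4B parameters
fp16 = weight_bytes_fp16(n)
w4 = weight_bytes_w4_g128(n)
print(f"FP16:    {fp16 / 2**30:.1f} GiB")
print(f"W4 G128: {w4 / 2**30:.1f} GiB ({fp16 / w4:.1f}x smaller)")
```

This back-of-the-envelope math is why such quantized variants fit on consumer GPUs: roughly 7.5 GiB of FP16 weights shrink to about 2 GiB.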
## Flux 4bit
**License:** Other · **Author:** eramth · 302 · 1
The Flux model uses a 4-bit Transformer and T5 encoder for text-to-image generation; non-commercial use only.
**Tags:** Text-to-Image
## Guanaco 7b Leh V2
**License:** GPL-3.0 · **Author:** KBlueLeaf · 474 · 37
A multilingual instruction-following language model based on LLaMA 7B, supporting English, Chinese, and Japanese, suitable for chatbots and instruction-following tasks.
**Tags:** Large Language Model, Transformers, Multilingual